Conversational text-to-SQL aims to translate multi-turn natural language questions into their corresponding SQL queries. Most state-of-the-art conversational text-to-SQL methods are incompatible with generative pre-trained language models (PLMs), such as T5. In this paper, we present a two-stage unified MultI-task Generation frAmework (MIGA) that leverages PLMs' ability to tackle conversational text-to-SQL. In the pre-training stage, MIGA first decomposes the main task into several related sub-tasks and then unifies them into the same sequence-to-sequence (Seq2Seq) paradigm with task-specific natural language prompts, so that multi-task training boosts the main task. Later, in the fine-tuning stage, we propose four SQL perturbations to alleviate the error-propagation problem. MIGA achieves state-of-the-art performance on two benchmarks (SParC and CoSQL). We also provide extensive analyses and discussions to shed light on new perspectives for conversational text-to-SQL.
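As an illustration of the multi-task unification described above, the sketch below formats two hypothetical sub-tasks as prompted Seq2Seq examples. The prompt wording, task names, and field layout are our own assumptions for illustration, not MIGA's actual templates.

```python
# Minimal sketch of unifying conversational text-to-SQL sub-tasks into one
# text-to-text format via task-specific natural language prompts, so a single
# generative PLM (e.g. T5) can be trained on all sub-tasks jointly.
# The prompt strings and task names below are hypothetical.

def to_seq2seq_example(task, question, context, target):
    """Prefix each example with a task prompt and pack the dialogue context
    and current question into a single source string."""
    prompts = {
        "sql_generation": "translate the dialogue into SQL:",
        "question_rewrite": "rewrite the question in a self-contained form:",
    }
    source = f"{prompts[task]} {context} | {question}"
    return {"source": source, "target": target}

ex = to_seq2seq_example(
    "sql_generation",
    "Which of them are in Europe?",
    "Q1: List all countries. SQL1: SELECT name FROM countries",
    "SELECT name FROM countries WHERE continent = 'Europe'",
)
print(ex["source"])
```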
The long-standing theory that a colour-naming system evolves under the dual pressures of efficient communication and perceptual mechanism is supported by a growing number of linguistic studies, including analyses of four decades of diachronic data from the Nafaanra language. This inspires us to explore whether artificial intelligence could evolve and discover a similar colour-naming system by optimising communication efficiency, represented by high-level recognition performance. Here, we propose a novel colour quantisation transformer, CQFormer, that quantises the colour space while maintaining machine-recognition accuracy on the quantised images. Given an RGB image, the Annotation Branch maps it into an index map before generating the quantised image with a colour palette, while the Palette Branch uses a key-point detection approach to locate suitable palette colours within the whole colour space. By interacting with colour annotation, CQFormer balances machine-vision accuracy against perceptual colour structure, such as a distinct and stable colour distribution in the discovered colour system. Interestingly, we even observe a consistent evolution pattern between our artificial colour system and the basic colour terms of human languages. Our approach also offers an efficient quantisation method that effectively compresses image storage while maintaining high performance on high-level recognition tasks such as classification and detection. Extensive experiments demonstrate the superior performance of our method with extremely low-bit-rate colours. We will release the source code soon.
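The index-map-plus-palette step described above can be sketched as plain nearest-colour assignment. This is a minimal illustration of colour quantisation with a fixed palette, not CQFormer's learned branches.

```python
# Illustrative sketch of the two outputs described above: an index map
# assigning each pixel to a palette entry, and the quantised image obtained
# by looking the indices back up in the palette.

def quantise(image, palette):
    """image: list of (r, g, b) pixels; palette: list of (r, g, b) colours.
    Returns (index_map, quantised_image)."""
    def dist2(a, b):
        # Squared Euclidean distance in RGB space.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    index_map = [min(range(len(palette)), key=lambda i: dist2(p, palette[i]))
                 for p in image]
    quantised = [palette[i] for i in index_map]
    return index_map, quantised

palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0)]
image = [(10, 10, 10), (250, 240, 245), (200, 30, 20)]
idx, q = quantise(image, palette)
print(idx)  # → [0, 1, 2]
```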
The problem of covariate-shift generalization has attracted intensive research attention. Previous stable learning algorithms employ sample reweighting schemes to decorrelate the covariates when there is no explicit domain information about the training data. However, with finite samples, it is difficult to achieve the desirable weights that ensure perfect independence and eliminate the unstable variables. Moreover, decorrelating within the stable variables may produce high variance in the learned models because of an over-reduced effective sample size; a tremendous sample size is required for these algorithms to work. In this paper, with theoretical justification, we propose SVI (Sparse Variable Independence) for the covariate-shift generalization problem. We introduce a sparsity constraint to compensate for the imperfection of sample reweighting in the finite-sample setting of previous methods. Furthermore, we organically combine independence-based sample reweighting and sparsity-based variable selection in an iterative way to avoid decorrelating within the stable variables, increasing the effective sample size and alleviating variance inflation. Experiments on both synthetic and real-world datasets demonstrate the improvement in covariate-shift generalization brought by SVI.
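A minimal sketch of the sparsity side of the alternation described above: soft-thresholding coefficients (the L1 proximal step) to select variables. The reweighting half and the surrounding iteration are omitted, and the coefficients here are hypothetical stand-ins for a reweighted regression fit.

```python
# Sketch of sparsity-based variable selection via soft thresholding.
# Only the selection step is shown; SVI alternates this with
# independence-based sample reweighting, which is omitted here.

def soft_threshold(x, lam):
    """L1 proximal operator: shrink x towards zero by lam."""
    return max(x - lam, 0.0) if x > 0 else min(x + lam, 0.0)

def sparse_select(coefs, lam):
    """Keep the indices whose soft-thresholded coefficient is non-zero."""
    return [i for i, c in enumerate(coefs) if soft_threshold(c, lam) != 0.0]

# Toy coefficients, as if fitted on reweighted (decorrelated) data.
coefs = [0.9, 0.05, -0.7, 0.01]
selected = sparse_select(coefs, lam=0.1)
print(selected)  # → [0, 2]
```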
Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. However, with frozen models there are relatively few parameters available for adapting to downstream tasks, which is problematic in computer vision where tasks vary significantly in input/output format and the type of information that is of value. In this paper, we present a study of frozen pretrained models when applied to diverse and representative computer vision tasks, including object detection, semantic segmentation and video action recognition. From this empirical analysis, our work answers the questions of what pretraining task fits best with this frozen setting, how to make the frozen setting more flexible to various downstream tasks, and the effect of larger model sizes. We additionally examine the upper bound of performance using a giant frozen pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches competitive performance on a varied set of major benchmarks with only one shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7 top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models.
This paper develops a Deep Graph Operator Network (DeepGraphONet) framework that learns to approximate the dynamics of complex systems with an underlying sub-graph structure, such as power grids or traffic networks. We build DeepGraphONet by fusing (i) a graph neural network (GNN) to exploit spatially correlated graph information and (ii) the ability of a deep operator network (DeepONet) to approximate the solution operator of dynamical systems. The resulting DeepGraphONet can then forecast the dynamics over a given short/medium-term horizon by observing a finite history of the graph state information. Furthermore, we design DeepGraphONet to be resolution-independent; that is, we do not require the finite history to be collected at the exact same resolution. In addition, to transfer the results of a trained DeepGraphONet, we design a zero-shot learning strategy that enables its use on a different sub-graph. Finally, empirical results on (i) transient-stability prediction for power grids and (ii) traffic-flow forecasting for a vehicular system illustrate the effectiveness of the proposed DeepGraphONet.
We introduce the first learning-based reconstructability predictor to improve view and path planning for large-scale 3D urban scene acquisition with drones. In contrast to previous heuristic measures, our method learns a model that explicitly predicts how well a 3D urban scene can be reconstructed from a set of viewpoints. To make such a model trainable and simultaneously applicable to drone path planning, we simulate proxy-based 3D scene reconstruction during training to set up the prediction. Specifically, our neural network is trained to predict the reconstructability of a scene as a function of the proxy geometry, a set of viewpoints, and a sequence of scene images acquired in flight. To reconstruct a new urban scene, we first build a 3D scene proxy and then rely on our network's predicted reconstruction quality and uncertainty measures, based on the proxy geometry, to guide drone path planning. We demonstrate that our data-driven reconstructability predictions correlate more closely with true reconstruction quality than prior heuristic measures. Moreover, our learned predictor can be easily integrated into existing path planners to yield improvements. Finally, we devise a new iterative view-planning framework based on the learned reconstructability and demonstrate the superior performance of the new planner when reconstructing both synthetic and real scenes.
Despite recent progress in integrating evolutionary computation into reinforcement learning, the lack of a high-performance platform offering composability and massive parallelism causes non-trivial difficulties for research and applications related to asynchronous commercial games. Here we introduce Lamarckian, an open-source platform featuring support for evolutionary reinforcement learning that scales to distributed computing resources. To improve training speed and data efficiency, Lamarckian adopts optimized communication methods and an asynchronous evolutionary reinforcement learning workflow. To meet the demand of commercial games and various methods for an asynchronous interface, Lamarckian tailors an asynchronous Markov decision process interface and designs an object-oriented software architecture with decoupled modules. Compared with the state-of-the-art RLlib, we empirically demonstrate the unique advantages of Lamarckian on benchmark tests with up to 6000 CPU cores: i) both the sampling efficiency and the training speed are doubled when running PPO on the Google Football game; ii) the training speed is 13 times faster when running PBT+PPO on the Pong game. Moreover, we present two use cases: i) how Lamarckian is applied to generate behavior-diverse game AI; ii) how Lamarckian is applied to game-balancing tests for an asynchronous commercial game.
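A toy population-based training (PBT) loop in the spirit of the evolutionary reinforcement learning workflow above. The single "hyperparameter", the fitness function, and the exploit/explore rules are illustrative assumptions, not Lamarckian's API.

```python
# Toy PBT loop: keep the fitter half of a hyperparameter population
# (exploit), then refill by copying and perturbing survivors (explore).
import random

random.seed(0)

def fitness(lr):
    # Hypothetical objective: the best "learning rate" is 0.1.
    return -abs(lr - 0.1)

population = [random.uniform(0.0, 1.0) for _ in range(8)]
for generation in range(20):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    # Exploit: copy a random survivor; explore: perturb its hyperparameter.
    population = survivors + [
        min(1.0, max(0.0, random.choice(survivors) * random.uniform(0.8, 1.2)))
        for _ in survivors
    ]
best = max(population, key=fitness)
print(round(best, 3))
```

Because survivors carry over unchanged, the best fitness in the population never decreases from generation to generation.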
Deep learning models have achieved excellent recognition results on large-scale video benchmarks. However, they perform poorly when applied to videos with rare scenes or objects, primarily because of the bias of existing video datasets. We tackle this problem from two perspectives: algorithm and dataset. From the algorithmic perspective, we propose Spatial-aware Multi-Aspect Debiasing (SMAD), which combines explicit debiasing through multi-aspect adversarial training with implicit debiasing through a spatial actionness reweighting module, approaching debiasing from the action aspect. To mitigate the intrinsic dataset bias, we propose OmniDebias to selectively leverage web data for joint training, which achieves higher performance with far less web data. To verify the effectiveness, we establish evaluation protocols and perform extensive experiments, both on re-distributed splits of existing datasets and on a new evaluation dataset focusing on rare scenes. We also show that the debiased representations generalize better when transferred to other datasets and tasks.
Recently, deep neural networks (DNNs) have achieved significant success in real-world image super-resolution (SR). However, adversarial image samples with quasi-imperceptible noise can threaten deep-learning-based SR models. In this paper, we propose a robust deep-learning framework for real-world SR that randomly erases potential adversarial noise in the frequency domain of the input images or features. The rationale is that, on the SR task, clean images or features differ from attacked ones in the frequency domain. Observing that existing adversarial attacks usually add high-frequency noise to input images, we introduce a novel random frequency mask module that blocks, in a stochastic manner, high-frequency components that may contain harmful perturbations. Since frequency masking may not only destroy adversarial perturbations but also affect sharp details in clean images, we further develop an adversarial sample classifier based on the frequency domain of images to determine whether the proposed mask module should be applied. Based on the above ideas, we design a novel real-world image SR framework that combines the proposed frequency mask module and the proposed adversarial classifier with an existing super-resolution backbone network. Experiments show that our proposed method is more insensitive to adversarial attacks and presents more stable SR results than existing models and defenses.
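A 1-D sketch of random frequency masking, under the stated assumption that a 1-D signal stands in for the paper's 2-D image or feature spectra: frequency bins above a cutoff are zeroed at random, removing high-frequency noise while keeping the low-frequency content.

```python
# Random high-frequency masking on a 1-D signal via a small DFT.
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def random_high_freq_mask(x, cutoff, drop_prob, rng):
    """Zero each frequency above `cutoff` (together with its mirror bin, to
    keep the signal real-valued) with probability `drop_prob`."""
    X = dft(x)
    n = len(X)
    for k in range(1, n // 2 + 1):
        if k > cutoff and rng.random() < drop_prob:
            X[k] = X[n - k] = 0j
    return idft(X)

# Low-frequency "content" plus a high-frequency "adversarial" term.
n = 8
x = [math.cos(2 * math.pi * t / n) + 0.5 * math.cos(2 * math.pi * 3 * t / n)
     for t in range(n)]
y = random_high_freq_mask(x, cutoff=1, drop_prob=1.0, rng=random.Random(0))
print(round(y[0], 4))  # high-frequency term removed; only cos(2*pi*t/8) remains
```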
Knowledge distillation (KD) transfers knowledge from a high-capacity teacher network to strengthen a smaller student. Existing methods focus on excavating knowledge hints and transferring the whole of the knowledge to the student. However, knowledge redundancy arises because the knowledge holds different value for the student at different learning stages. In this paper, we propose Knowledge Condensation Distillation (KCD). Specifically, the knowledge value of each sample is dynamically estimated, and based on an expectation-maximization (EM) framework a compact knowledge set is iteratively condensed from the teacher to guide the student's learning. Our approach is easily built on top of off-the-shelf KD methods, with no extra training parameters and negligible computational overhead. It thus offers a new perspective on KD, in which a student that actively identifies the teacher's knowledge learns more effectively and efficiently. Experiments on standard benchmarks show that the proposed KCD boosts the performance of student models with even higher distillation efficiency. The code is available at https://github.com/dzy3/kcd.
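A schematic sketch of condensing knowledge to the currently most valuable samples, in the spirit of KCD. Measuring value as teacher-student KL divergence and keeping a fixed top fraction are our assumptions for illustration; the EM iteration and the distillation loss itself are omitted.

```python
# Rank samples by how much the student still disagrees with the teacher and
# keep only the most informative fraction for distillation.
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def condense(teacher_probs, student_probs, keep_ratio):
    """Estimate per-sample knowledge value and keep the top fraction."""
    values = [kl(t, s) for t, s in zip(teacher_probs, student_probs)]
    k = max(1, int(len(values) * keep_ratio))
    ranked = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    return sorted(ranked[:k])

# Toy teacher/student output distributions on three samples.
teacher = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]]
student = [[0.85, 0.15], [0.5, 0.5], [0.7, 0.3]]
print(condense(teacher, student, keep_ratio=0.34))  # → [2]
```

Sample 2, where the student diverges most from the teacher, is the one retained; as the student improves, re-estimating the values shifts the condensed set.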